rubber duck


Super-sticky hydrogel is 10 times stronger than other glues underwater

New Scientist

A rubber duck that was stuck to a seaside rock for more than a year has proved the strength of a new sticky material. The adhesive could be used in deep-sea robots and repair work, or as surgical glue for medical procedures. "We developed a super-adhesive hydrogel that works extremely well even underwater – something very few materials can achieve," says Hailong Fan at Shenzhen University in China. Hydrogels are stretchy and soft materials. Fan, then at Hokkaido University in Japan, and his colleagues analysed 24,000 sticky protein sequences from many different organisms to identify the stickiest combinations of amino acids, the building blocks of proteins.


OpenToM: A Comprehensive Benchmark for Evaluating Theory-of-Mind Reasoning Capabilities of Large Language Models

Xu, Hainiu, Zhao, Runcong, Zhu, Lixing, Du, Jinhua, He, Yulan

arXiv.org Artificial Intelligence

Neural Theory-of-Mind (N-ToM), a machine's ability to understand and keep track of the mental states of others, is pivotal in developing socially intelligent agents. However, prevalent N-ToM benchmarks have several shortcomings, including the presence of ambiguous and artificial narratives, the absence of personality traits and preferences, a lack of questions addressing characters' psychological mental states, and limited diversity in the questions posed. In response to these issues, we construct OpenToM, a new benchmark for assessing N-ToM with (1) longer and clearer narrative stories, (2) characters with explicit personality traits, (3) actions that are triggered by character intentions, and (4) questions designed to challenge LLMs' capabilities of modeling characters' mental states of both the physical and psychological world. Using OpenToM, we reveal that state-of-the-art LLMs thrive at modeling certain aspects of mental states in the physical world but fall short when tracking characters' mental states in the psychological world.


Robot Duck Debugging: Can Attentive Listening Improve Problem Solving?

Parreira, Maria Teresa, Gillet, Sarah, Leite, Iolanda

arXiv.org Artificial Intelligence

While thinking aloud has been reported to positively affect problem-solving, the effects of the presence of an embodied entity (e.g., a social robot) to whom words can be directed remain mostly unexplored. In this work, we investigated the role of a robot in a "rubber duck debugging" setting, by analyzing how a robot's listening behaviors could support a thinking-aloud problem-solving session. Participants completed two different tasks while speaking their thoughts aloud to either a robot or an inanimate object (a giant rubber duck). We implemented and tested two types of listener behavior in the robot: a rule-based heuristic and a deep-learning-based model. In a between-subject user study with 101 participants, we evaluated how the presence of a robot affected users' engagement in thinking aloud, behavior during the task, and self-reported user experience. In addition, we explored the impact of the two robot listening behaviors on those measures. In contrast to prior work, our results indicate that neither the rule-based heuristic nor the deep learning robot conditions improved performance or perception of the task, compared to an inanimate object. We discuss potential explanations and shed light on the feasibility of designing social robots as assistive tools in thinking-aloud problem-solving tasks.


What You Never Knew About Attention Mechanisms

#artificialintelligence

This blog is written and maintained by students in the Master of Science in Professional Computer Science Program at Simon Fraser University as part of their course credit. To learn more about this unique program, please visit sfu.ca/computing/mpcs. Where are your eyes drawn in this photo? Most of us will admit that our eyes are drawn to the blue duckling. To humans, the blue duckling sticks out like a sore thumb.


Weakly-supervised multi-class object localization using only object counts as labels

Mills, Kyle, Tamblyn, Isaac

arXiv.org Artificial Intelligence

We demonstrate the use of an extensive deep neural network (EDNN) to localize instances of objects in images. The EDNN is naturally able to accurately perform multi-class counting using only ground truth count values as labels. Without providing any conceptual information, object annotations, or pixel segmentation information, the neural network is able to formulate its own conceptual representation of the items in the image. Using images labelled with only the counts of the objects present, the structure of the extensive deep neural network can be exploited to perform localization of the objects within the visual field. We demonstrate that a trained EDNN can be used to count objects in images much larger than those on which it was trained. In order to demonstrate our technique, we introduce seven new datasets: five progressively harder MNIST digit-counting datasets, and two datasets of 3D-rendered rubber ducks in various situations. On most of these datasets, the EDNN achieves greater than 99% test set accuracy in counting objects.
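The paper's exact EDNN architecture is not reproduced here, but the core idea of count-supervised localization can be sketched in miniature: a model emits a spatial activation map, training matches only the map's sum to the ground-truth count, and yet the peaks of the learned map end up marking object locations. The function and activation values below are purely illustrative assumptions, not the authors' implementation.

```python
# Toy sketch of count-supervised localization: the count is the
# SUM over a spatial activation map (the only supervised quantity),
# while thresholding the same map recovers object positions.
# The map values below are hypothetical, not from the paper.

def count_and_localize(activation_map, threshold=0.5):
    """Sum the map for a count estimate; threshold it for locations."""
    total = sum(sum(row) for row in activation_map)
    count = round(total)
    locations = [
        (r, c)
        for r, row in enumerate(activation_map)
        for c, value in enumerate(row)
        if value >= threshold
    ]
    return count, locations

# A hypothetical 4x4 activation map with two strong peaks,
# i.e. two detected objects.
amap = [
    [0.0, 0.0, 0.1, 0.0],
    [0.0, 0.9, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.1],
    [0.0, 0.0, 0.9, 0.0],
]
count, locs = count_and_localize(amap)
```

Because the sum is translation-invariant, a model trained this way on small images can, as the abstract notes, be run over much larger fields of view and still count and place objects.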


Smart tech toys for your children

USATODAY - Tech Top Stories

Toys don't have to just entertain. They can help children learn, too. Columnist Jennifer Jolly shares her favorite smart tech toys. Now that it's officially fall, the chilly days of autumn are creeping in, outdoor playtime is waning, and the push for the hottest holiday toys is starting to really heat up. Before the "I've been such a good kid this year" pleas even start, here's the scoop on what to look for.


Key Tips for a Machine Learning Beginner - Machine Philosopher

#artificialintelligence

When I first decided to focus on machine learning, I didn't really know what to expect. With experience already in Python, I did my research and found that sklearn seemed to be a popular library. Within one hour, I had an up-and-running SVM fitted to some randomly generated data, swimming in predictions galore. But this was definitely not all there was to it! As I explored further, I realized that there were so many models and concepts in machine learning that I'd never felt so intimidated.


Won't you take me to Duckietown? MIT is using rubber ducks to test self-driving tech

#artificialintelligence

In order to make self-driving cars viable, the automotive industry has recruited some of the best software developers, hardware engineers, and mobility analysts humanity has to offer. There's a new community working to push autonomous technology forward, but these researchers aren't human at all. Buried deep within the halls of MIT's Computer Science and Artificial Intelligence Lab (CSAIL) lies a small suburb called Duckietown, a mock-up municipality used to test and develop driverless technology. Populated entirely by rubber ducks riding on autonomous robo-taxis, Duckietown is the culmination of a graduate-level class that could prove invaluable to automakers in the future. "We believe a tool like this will help create a common platform and language for researchers to build on," said CSAIL postdoctoral associate Liam Paull, who co-leads the Duckietown course.


Quacky races! Technology used in self-driving cars is being tested in toy taxis carrying rubber DUCKS around a tiny town

Daily Mail - Science & tech

This experiment may look quackers, but it is an important step in teaching engineers of the future to train self-driving vehicles to navigate a town. Experts have created 'Duckietown' - a miniature town with complex road junctions that's home to up to 50 taxis 'driven' by rubber ducks. The self-driving duck taxis are fitted with cameras that allow them to read road signs and avoid crashing into obstacles. Duckietown is the brainchild of computer scientists at MIT's Computer Science and Artificial Intelligence Lab (CSAIL), where students are taught about autonomous vehicle technologies using 50 duck-mobiles. As part of the class, students had to build a fleet of duckie-adorned robo-taxis that use a single camera to navigate. Unlike Google's experimental car, for example, the duck-mobiles do not rely on pre-programmed maps to find their way around a network of roads, instead navigating without any clues in real time.